
Keyword Search Results

[Keyword] neural network (855 hits)

Results 841-855 of 855 hits

  • On Collective Computational Properties of T-Model and Hopfield Neural Networks

    Okihiko ISHIZUKA  Zheng TANG  Akihiro TAKEI  Hiroki MATSUMOTO  

     
    PAPER-Neural Network Design

      Vol:
    E75-A No:6
      Page(s):
    663-669

    This paper extends an earlier study of the T-Model neural network to its collective computational properties. We argue that it is necessary to use half-interconnected T-Model networks rather than fully-interconnected Hopfield model networks. The T-Model was developed in response to a number of observed weaknesses in the Hopfield model; this paper identifies these problems and shows how the T-Model overcomes them. The T-Model network is essentially a feedforward network and does not produce local minima in its computations. A framework for understanding the dynamics of the T-Model neural circuit is presented, and its performance is compared with the Hopfield model. The T-Model neural circuit has been implemented and tested in standard CMOS technology. Simulations and experiments show that the T-Model supports large-scale collective network computations without producing local minima, and densities comparable to those of Hopfield model implementations have also been achieved.
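
    The abstract does not give the T-Model circuit equations; purely as a reference point for the comparison above, here is a minimal sketch of the standard Hopfield energy and asynchronous update whose local-minimum problem the T-Model is said to avoid. The names and the toy pattern are illustrative, not taken from the paper.

```python
import numpy as np

def hopfield_energy(W, s, b):
    """Standard Hopfield energy E = -1/2 s^T W s - b^T s; asynchronous
    updates never increase it, so the state can get stuck in a local minimum."""
    return -0.5 * s @ W @ s - b @ s

def hopfield_recall(W, b, s, steps=100, seed=0):
    """Asynchronous sign updates of randomly chosen neurons."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s + b[i] >= 0 else -1
    return s

# toy usage: store one 4-bit pattern with a Hebbian weight matrix
p = np.array([1, -1, 1, -1])
W = np.outer(p, p) - np.eye(4)          # zero self-connections
b = np.zeros(4)
s = hopfield_recall(W, b, np.array([1, -1, -1, -1]))
print(s, hopfield_energy(W, s, b))
```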

  • Image Compression and Regeneration by Nonlinear Associative Silicon Retina

    Mamoru TANAKA  Yoshinori NAKAMURA  Munemitsu IKEGAMI  Kikufumi KANDA  Taizou HATTORI  Yasutami CHIGUSA  Hikaru MIZUTANI  

     
    PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    586-594

    There are two types of nonlinear associative silicon retinas. One is a sparse Hopfield-type neural network, called an H-type retina, and the other is its dual network, called a DH-type retina. The input information sequences of H-type and DH-type retinas are given on nodes and links as voltages and currents, respectively. The error-correcting capacity (minimum basin of attraction) of H-type and DH-type retinas is determined by the minimum number of links in a cutset and in a loop, respectively. The operation principle of the regeneration is based on the voltage or current distribution of the neural field. The most important nonlinear operation in the retinas is a dynamic quantization that decides the binary value of each neuron output from neighboring values. Edges are also emphasized by a line process. The compression rates of the H-type and DH-type retinas used in the simulation are 1/8 and (2/3)(1/8), respectively, where 2/3 and 1/8 denote the structural and binarizational compression rates. The simulation results are sufficiently promising to justify fabricating a chip.
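
    As a quick restatement of the compression figures quoted above (no new data), the overall rates work out to

```latex
r_{\mathrm{H}} = \tfrac{1}{8} = 0.125,
\qquad
r_{\mathrm{DH}} = \underbrace{\tfrac{2}{3}}_{\text{structural}} \times
                  \underbrace{\tfrac{1}{8}}_{\text{binarizational}}
                = \tfrac{1}{12} \approx 0.083 .
```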

  • Neural Networks Applied to Speech Recognition

    Hiroaki SAKOE  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    546-551

    Applications of neural networks are prevalent in speech recognition research. In this paper, the suitable role of neural networks (mainly back-propagation-based multi-layer types) in speech recognition is first discussed. Considering that speech is a long, variable-length, structured pattern, an approach in which neural networks are used in cooperation with existing structural analysis frameworks is recommended. Activities are surveyed, including those intended to merge neural networks cooperatively into dynamic-programming-based structural analysis frameworks. It is observed that considerable effort has been devoted to suppressing the high nonlinearity of network outputs. As far as this survey covers, no real-field experiments have been reported.

  • Separating Capabilities of Three Layer Neural Networks

    Ryuzo TAKIYAMA  

     
    SURVEY PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    561-567

    This paper reviews the capability of the three-layer neural network (TLNN) with one output neuron. The input set is restricted to a finite subset S of E^n, and the TLNN implements a function F : S → I = {1, -1}, i.e., F is a dichotomy of S. How many functions (dichotomies) can it compute by appropriately adjusting the parameters of the TLNN? A brief historical review, theorems obtained so far on the subject, and related topics are presented. Several open problems are also included.
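
    To make the setting concrete, a common formalization (the notation is mine, not quoted from the paper): a TLNN with h hidden neurons and one output neuron computes

```latex
F(\mathbf{x}) \;=\; \operatorname{sgn}\!\Bigl(\sum_{j=1}^{h} v_j \,
\operatorname{sgn}\bigl(\mathbf{w}_j^{\top}\mathbf{x} + b_j\bigr) + c\Bigr)
\;\in\; I = \{1,-1\}, \qquad \mathbf{x} \in S \subset E^{n},
```

    and the separating-capability question is how many of the 2^|S| possible dichotomies of the finite set S are realizable by some choice of the parameters w_j, b_j, v_j, c.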

  • Principal Component Analysis by Homogeneous Neural Networks, Part II: Analysis and Extensions of the Learning Algorithms

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics

      Vol:
    E75-D No:3
      Page(s):
    376-382

    Artificial neurons and neural networks have been shown to perform Principal Component Analysis (PCA) when gradient ascent learning rules are used, which are related to the constrained maximization of statistical objective functions. Due to their parallelism and adaptivity to input data, such algorithms and their implementations in neural networks are potentially useful in feature extraction and data compression. In the companion paper(9), two such learning rules were derived from two criteria, the Subspace Criterion and the Weighted Subspace Criterion. It was shown that the only solutions to the latter problem are the dominant eigenvectors of the data covariance matrix, which are the basis vectors of PCA. A simulation suggested that the corresponding learning algorithm converges to these eigenvectors. A homogeneous neural network implementation was proposed for the algorithm. The learning algorithm is analyzed here in detail, and it is shown that it can be approximated by a continuous-time differential equation obtained by averaging. It is shown that the asymptotically stable limits of this differential equation are the eigenvectors. The neural network learning algorithm is further extended to a case in which each neuron has a sigmoidal nonlinear feedback activity function. Then no parameters specific to each neuron are needed, and the learning rule is fully homogeneous.
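
    For orientation, the following is a minimal sketch of the plain Subspace Network rule that the paper analyzes and extends (the weighted, symmetry-breaking variant and the sigmoidal-feedback extension are not reproduced here; the data, step size, and epoch count are illustrative assumptions).

```python
import numpy as np

def subspace_learning(X, m, eta=0.01, epochs=50, seed=0):
    """Oja-type subspace rule: W <- W + eta * (x - W y) y^T with y = W^T x.
    For small eta, W tends toward an orthonormal basis of the dominant
    m-dimensional eigen-subspace of the data covariance, but not to the
    individual eigenvectors -- the rotational ambiguity the weighted
    criterion is designed to remove."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.standard_normal((n, m)) * 0.1
    for _ in range(epochs):
        for x in X:
            y = W.T @ x
            W += eta * np.outer(x - W @ y, y)
    return W

# toy data: 200 samples with strong variance along the first two axes
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) * np.array([3.0, 2.0, 0.3, 0.3, 0.3])
W = subspace_learning(X, m=2)
print(np.round(W.T @ W, 2))   # approximately the 2x2 identity after convergence
```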

  • Information Geometry of Neural Networks

    Shun-ichi AMARI  

     
    INVITED PAPER

      Vol:
    E75-A No:5
      Page(s):
    531-536

    Information geometry is a powerful new method in the information sciences. Here it is applied to manifolds of neural networks of various architectures. A new theoretical approach is proposed for the manifold of feedforward neural networks, the manifold of Boltzmann machines, and the manifold of recurrent neural networks. This opens a new direction of study on families of neural networks, rather than on the behavior of single neural networks.

  • Coupling of Memory Search and Mental Rotation by a Nonequilibrium Dynamics Neural Network

    Jun TANI  Masahiro FUJITA  

     
    PAPER-Neural Systems

      Vol:
    E75-A No:5
      Page(s):
    578-585

    This paper introduces a model of the human rotation-invariant recognition mechanism at the neural level. In the model, mechanisms of memory search and mental rotation are realized in the process of minimizing the energy of a bi-directional connection network. The thrust of the paper is to explain temporal mental activities, such as successive memory retrievals and continuous mental rotation, in terms of state transitions of collective neurons based on nonequilibrium dynamics. We conclude that regularities emerging in the dynamics of intermittent chaos lead the recognition process in a structured and meaningful way.

  • Fractal Dimension of Neural Networks

    Ikuo MATSUBA  

     
    PAPER-Bio-Cybernetics

      Vol:
    E75-D No:3
      Page(s):
    363-365

    A theoretical conjecture on fractal dimensions of a dendrite distribution in neural networks is presented on the basis of the dendrite tree model. It is shown that the fractal dimensions obtained by the model are consistent with the recent experimental data.

  • Principal Component Analysis by Homogeneous Neural Networks, Part I: The Weighted Subspace Criterion

    Erkki OJA  Hidemitsu OGAWA  Jaroonsakdi WANGVIWATTANA  

     
    PAPER-Bio-Cybernetics

      Vol:
    E75-D No:3
      Page(s):
    366-375

    Principal Component Analysis (PCA) is a useful technique in feature extraction and data compression. It can be formulated as a statistical constrained maximization problem, whose solution is given by unit eigenvectors of the data covariance matrix. In a practical application like image compression, the problem can be solved numerically by a corresponding gradient ascent maximization algorithm. Such on-line algorithms can be good alternatives due to their parallelism and adaptivity to input data. The algorithms can be implemented in a local and homogeneous way in learning neural networks. One example is the Subspace Network: a regular layer of parallel artificial neurons with a learning rule that is completely homogeneous with respect to the neurons. However, due to this complete homogeneity, the learning rule does not converge to the unique basis given by the dominant eigenvectors; any basis of this eigenvector subspace is possible. In many applications like data compression, the subspace alone is not sufficient, and the actual eigenvectors, or PCA coefficient vectors, are needed. A new criterion, called the Weighted Subspace Criterion, is proposed, which makes a small symmetry-breaking change to the Subspace Criterion so that only the true eigenvectors are solutions. Making the corresponding change to the learning rule of the Subspace Network gives a modified learning rule, which can still be implemented on a homogeneous network architecture. In learning, the weight vectors tend to the true eigenvectors.
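
    In a common notation (mine, and possibly differing in detail from the paper's), the two criteria can be written as

```latex
\text{Subspace Criterion:}\quad
  \max_{W}\ \mathbb{E}\bigl[\lVert W^{\top}\mathbf{x}\rVert^{2}\bigr]
  \quad \text{s.t. } W^{\top}W = I,
\qquad
\text{Weighted Subspace Criterion:}\quad
  \max_{W}\ \sum_{j=1}^{m} \theta_j\,
  \mathbb{E}\bigl[(\mathbf{w}_j^{\top}\mathbf{x})^{2}\bigr]
  \quad \text{s.t. } W^{\top}W = I,\ \ \theta_1 > \cdots > \theta_m > 0 .
```

    The distinct positive weights break the rotational symmetry of the subspace solution, which is why only the true dominant eigenvectors remain as maximizers.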

  • Analog VLSI Implementation of Adaptive Algorithms by an Extended Hebbian Synapse Circuit

    Takashi MORIE  Osamu FUJITA  Yoshihito AMEMIYA  

     
    PAPER

      Vol:
    E75-C No:3
      Page(s):
    303-311

    First, a number of issues pertaining to analog VLSI implementation of Backpropagation (BP) and Deterministic Boltzmann Machine (DBM) learning algorithms are clarified. According to results from software simulation, a mismatch between the activation function and its derivative, when they are generated by independent circuits, degrades BP learning performance. The performance can be improved, however, by adjusting the gain of the activation function used to obtain the derivative, irrespective of the original activation function. Calculation errors embedded in the circuits also degrade learning performance: BP learning is sensitive to offset errors in the multiplication performed during learning, and DBM learning is sensitive to asymmetry between the weight increment and decrement processes. Next, an analog VLSI architecture for implementing the algorithms using common building-block circuits is proposed. The evaluation results of test chips confirm that synaptic weights can be updated at up to 1 MHz and that a resolution exceeding 14 bits can be attained. The test chips successfully perform XOR learning using each algorithm.
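
    A toy illustration of two of the sensitivities described above: a derivative generated independently of the activation function (with its own adjustable gain) and an offset error in the multiplication of the weight update. The gains, offset, and single-neuron example are made-up values, not those of the chips.

```python
import numpy as np

def act(x, gain=1.0):
    """Sigmoidal activation as generated by one circuit."""
    return np.tanh(gain * x)

def act_deriv(x, gain=1.0):
    """Derivative generated by an *independent* circuit; its gain can be
    tuned separately to compensate for a mismatch with act()."""
    return gain * (1.0 - np.tanh(gain * x) ** 2)

def bp_update(w, x, target, eta=0.1, fwd_gain=1.0, deriv_gain=1.0, offset=0.0):
    """One BP weight update for a single neuron, with an additive offset term
    modelling the analog multiplication error the abstract says BP is sensitive to."""
    u = w @ x
    y = act(u, fwd_gain)
    delta = (target - y) * act_deriv(u, deriv_gain)
    return w + eta * (delta * x + offset)   # the offset accumulates over updates

w = np.array([0.2, -0.1])
print(bp_update(w, np.array([1.0, 0.5]), target=1.0, deriv_gain=0.7, offset=0.01))
```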

  • An Application of Dynamic Channel Assignment to a Part of a Service Area of a Cellular Mobile Communication System

    Keisuke NAKANO  Masaharu YOKONO  Masakazu SENGOKU  Yoshio YAMAGUCHI  Shoji SHINODA  Seiichi MOTOOKA  Takeo ABE  

     
    PAPER

      Vol:
    E75-A No:3
      Page(s):
    369-379

    In general, dynamic channel assignment performs better than fixed channel assignment in a cellular mobile communication system. However, the system is complex to control, and a great deal of equipment is required in each cell when dynamic channel assignment is applied to a large service area. It is therefore effective to limit the size of the region using dynamic assignment in order to mitigate these drawbacks. We propose applying dynamic channel assignment to part of a service area while fixed channel assignment is applied to the remaining part. In such a system, the efficiency of channel usage in some cells can become very low, and this problem needs to be addressed. We show that rearrangement of the channel allocation is effective against this problem.
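
    A rough sketch of the hybrid idea under stated assumptions (the cell layout, channel counts, and greedy channel choice are invented for illustration and are not the system studied in the paper): cells in the dynamic region may take any channel not in use by an interfering cell, while fixed-assignment cells are restricted to their preassigned sets.

```python
# Toy hybrid FCA/DCA assignment over a 3-cell chain (all values illustrative).
CHANNELS = set(range(10))
interferes = {0: {1}, 1: {0, 2}, 2: {1}}       # which cells interfere with which
fixed = {0: {0, 1, 2}, 2: {3, 4, 5}}           # FCA cells and their fixed channel sets
in_use = {0: set(), 1: set(), 2: set()}        # channels currently carrying calls

def assign(cell):
    """Return a channel for a new call in `cell`, or None if the call is blocked."""
    busy_nearby = in_use[cell] | set().union(*(in_use[c] for c in interferes[cell]))
    candidates = fixed.get(cell, CHANNELS) - busy_nearby   # DCA cell: any free channel
    if not candidates:
        return None
    ch = min(candidates)
    in_use[cell].add(ch)
    return ch

# cell 1 (DCA) borrows freely; cell 0 (FCA) must stay within its fixed set
print(assign(1), assign(0), assign(1))
```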

  • Annealing by Perturbing Synapses

    Shiao-Lin LIN  Jiann-Ming WU  Cheng-Yuan LIOU  

     
    PAPER-Bio-Cybernetics

      Vol:
    E75-D No:2
      Page(s):
    210-218

    By close analogy with the annealing of solids, we devise a new algorithm, called APS, for the time evolution of both the state and the synapses of the Hopfield neural network. Through constrained random perturbation of the network's synapses, the evolution of the state ignores the tremendous number of small minima and reaches a good minimum. The synapses resemble the microstructure of a network; this new algorithm anneals that microstructure through a thermally controlled process and allows us to obtain a good minimum of the Hopfield model efficiently. We show the potential of this approach for optimization problems by applying it to the well-known traveling salesman problem. The performance of the new algorithm is supported by many computer simulations.
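
    A minimal sketch of the APS idea as described above, assuming a linearly shrinking perturbation amplitude; the actual perturbation constraints and cooling schedule used in the paper are not reproduced here, and the toy network is illustrative.

```python
import numpy as np

def aps_minimize(W, b, s0, sweeps=200, noise0=0.5, seed=0):
    """Sketch of Annealing by Perturbing Synapses: run Hopfield-style updates
    on a randomly perturbed weight matrix, shrinking the perturbation so the
    state can escape shallow minima early and settle in a good one later."""
    rng = np.random.default_rng(seed)
    s, n = s0.copy(), len(s0)
    for t in range(sweeps):
        noise = noise0 * (1.0 - t / sweeps)       # assumed linear cooling schedule
        P = rng.standard_normal((n, n)) * noise
        Wp = W + (P + P.T) / 2                    # keep the perturbed synapses symmetric
        np.fill_diagonal(Wp, 0.0)
        for i in rng.permutation(n):              # one asynchronous sweep
            s[i] = 1 if Wp[i] @ s + b[i] >= 0 else -1
    return s

# toy usage on a random symmetric network
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 8)); W = (A + A.T) / 2; np.fill_diagonal(W, 0.0)
s = aps_minimize(W, np.zeros(8), np.where(rng.random(8) > 0.5, 1, -1))
print(s, -0.5 * s @ W @ s)                        # final state and its energy
```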

  • Optical Information Processing Systems

    W. Thomas CATHEY  Satoshi ISHIHARA  Soo-Young LEE  Jacek CHROSTOWSKI  

     
    INVITED PAPER

      Vol:
    E75-A No:1
      Page(s):
    28-37

    We review the role of optics in interconnects, analog processing, neural networks, and digital computing. The properties of low interference, massively parallel interconnections, and very high data rates promise extremely high performance for optical information processing systems.

  • Optical Information Processing Systems

    W. Thomas CATHEY  Satoshi ISHIHARA  Soo-Young LEE  Jacek CHROSTOWSKI  

     
    INVITED PAPER

      Vol:
    E75-C No:1
      Page(s):
    26-35

    We review the role of optics in interconnects, analog processing, neural networks, and digital computing. The properties of low interference, massively parallel interconnections, and very high data rates promise extremely high performance for optical information processing systems.

  • Connected Associative Memory Neural Network with Dynamical Threshold Function

    Xin-Min HUANG  Yasumitsu MIYAZAKI  

     
    PAPER-Bio-Cybernetics

      Vol:
    E75-D No:1
      Page(s):
    170-179

    This paper presents a new connected associative memory neural network. In this network, a threshold function with two dynamical parameters is introduced. After analyzing the dynamical behavior and giving an upper bound on the memory capacity of the conventional connected associative memory neural network, it is demonstrated that these parameters play an important role in the recalling processes of the connected neural network. An approximate method for evaluating their optimum values is given. Further, the optimum feedback stopping time of this network is discussed: the recalling process is terminated at the optimum feedback stopping time whether or not the state energy has reached a local minimum. Computer simulations show that the dynamical behavior of our network is greatly improved; even when the number of learned patterns is as large as the number of neurons, the output series of the recalling process statistically approaches the patterns expected from the initial inputs.
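
    A hedged sketch of recall with a time-varying threshold and a fixed stopping time. The exponential form theta(t) = a*exp(-t/tau), the two parameters (a, tau), and t_stop are stand-ins for the paper's dynamical threshold parameters and optimum feedback stopping time, whose actual forms are not given in the abstract.

```python
import numpy as np

def recall_with_dynamic_threshold(W, s0, a=0.3, tau=5.0, t_stop=20):
    """Synchronous recall with a decaying threshold theta(t) = a*exp(-t/tau);
    the recall is stopped at the assumed optimum time t_stop whether or not
    a minimum has been reached (mirroring the stopping rule described above)."""
    s = s0.copy()
    for t in range(t_stop):
        theta = a * np.exp(-t / tau)              # assumed functional form
        s = np.where(W @ s >= theta, 1, -1)       # thresholded update
    return s

# toy usage: one stored pattern, recall from a one-bit-corrupted input
p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(p, p) / len(p); np.fill_diagonal(W, 0.0)
noisy = p.copy(); noisy[0] = -noisy[0]
print(recall_with_dynamic_threshold(W, noisy), "target:", p)
```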
